309 research outputs found

    Schwarz Iterative Methods: Infinite Space Splittings

    We prove the convergence of greedy and randomized versions of Schwarz iterative methods for solving linear elliptic variational problems based on infinite space splittings of a Hilbert space. For the greedy case, we show a squared error decay rate of $O((m+1)^{-1})$ for elements of an approximation space $\mathcal{A}_1$ related to the underlying splitting. For the randomized case, we show an expected squared error decay rate of $O((m+1)^{-1})$ on a class $\mathcal{A}_{\infty}^{\pi}\subset \mathcal{A}_1$ depending on the probability distribution.
    Comment: Revised version, accepted in Constr. Approx.
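
    The setting above is infinite-dimensional, but the mechanics of the greedy and randomized selection rules can be illustrated on a finite toy problem. The sketch below is my own construction, not the algorithm analyzed in the paper: it runs one-dimensional subspace correction for an SPD system A u = f, choosing the next coordinate either greedily by largest energy decrease or uniformly at random; all names and parameters are illustrative.

```python
# A minimal sketch (not the paper's algorithm): greedy vs. randomized
# one-dimensional subspace correction ("coordinate" Schwarz iteration)
# for an SPD system A u = f, a finite-dimensional analogue of the
# space splittings discussed in the abstract.
import numpy as np

def schwarz_1d_splitting(A, f, m=200, rule="greedy", rng=None):
    """Run m steps of one-dimensional subspace correction for A u = f."""
    rng = np.random.default_rng(rng)
    n = len(f)
    u = np.zeros(n)
    d = np.diag(A)
    for _ in range(m):
        r = f - A @ u                      # residual of the variational problem
        if rule == "greedy":
            i = np.argmax(r**2 / d)        # largest energy decrease (Gauss-Southwell)
        else:
            i = rng.integers(n)            # randomized selection, uniform distribution
        u[i] += r[i] / d[i]                # exact solve on the 1-D subspace span{e_i}
    return u

# usage example on a random SPD system
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
f = rng.standard_normal(50)
u_star = np.linalg.solve(A, f)
u_greedy = schwarz_1d_splitting(A, f, rule="greedy")
print("squared energy error:", (u_greedy - u_star) @ A @ (u_greedy - u_star))
```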

    Stochastic subspace correction in Hilbert space

    We consider an incremental approximation method for solving variational problems in infinite-dimensional Hilbert spaces, where in each step a randomly and independently selected subproblem from an infinite collection of subproblems is solved. We show that convergence rates for the expectation of the squared error can be guaranteed under weaker conditions than previously established in [Constr. Approx. 44:1 (2016), 121-139]. A connection to the theory of learning algorithms in reproducing kernel Hilbert spaces is revealed.
    Comment: 15 pages
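
    As a rough finite-dimensional stand-in for the Hilbert-space setting (again not the paper's construction), the sketch below solves, in each step, the subproblem obtained by restricting an SPD system to a randomly and independently drawn block of coordinates; the blocks and parameters are made up for illustration.

```python
# A minimal sketch of stochastic subspace correction on a finite SPD system:
# each step exactly solves the restriction of A u = f to a randomly chosen
# coordinate block. The block decomposition and all parameters are illustrative.
import numpy as np

def stochastic_subspace_correction(A, f, blocks, m=500, rng=None):
    rng = np.random.default_rng(rng)
    u = np.zeros(len(f))
    for _ in range(m):
        S = blocks[rng.integers(len(blocks))]            # independently drawn subproblem
        r = f - A @ u
        u[S] += np.linalg.solve(A[np.ix_(S, S)], r[S])   # exact solve on the subspace
    return u

# usage: overlapping blocks of width 5 on a 1-D Laplacian-type matrix
n = 60
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1) + 0.1 * np.eye(n)
f = np.ones(n)
blocks = [np.arange(i, min(i + 5, n)) for i in range(0, n, 3)]
u = stochastic_subspace_correction(A, f, blocks)
print("residual norm:", np.linalg.norm(f - A @ u))
```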

    Kernel-based stochastic collocation for the random two-phase Navier-Stokes equations

    In this work, we apply stochastic collocation methods with radial kernel basis functions to the uncertainty quantification of the random incompressible two-phase Navier-Stokes equations. Our approach is non-intrusive: we use the existing fluid dynamics solver NaSt3DGPF to solve the incompressible two-phase Navier-Stokes equations for each given realization. We show empirically that the resulting kernel-based stochastic collocation is highly competitive in this setting and even outperforms some other standard methods.
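
    Below is a hedged sketch of non-intrusive kernel-based stochastic collocation in a single random parameter. The qoi function is a cheap hypothetical stand-in for a NaSt3DGPF run, and the Gaussian kernel, collocation points, and shape parameter are illustrative choices, not those of the paper.

```python
# Non-intrusive kernel-based stochastic collocation, toy version: sample the
# random parameter at collocation points, evaluate the (stand-in) solver per
# realization, fit a Gaussian RBF interpolant of the quantity of interest,
# and estimate its mean by quadrature over the parameter density.
import numpy as np

def qoi(y):
    # hypothetical quantity of interest, standing in for a two-phase flow output
    return np.sin(3 * y) + 0.5 * y**2

def gaussian_kernel(x, y, eps=2.0):
    return np.exp(-(eps * (x[:, None] - y[None, :]))**2)

# collocation in the random parameter y ~ U(-1, 1)
y_colloc = np.linspace(-1.0, 1.0, 9)          # collocation points
samples = qoi(y_colloc)                       # one "solver run" per realization
K = gaussian_kernel(y_colloc, y_colloc)
coeffs = np.linalg.solve(K, samples)          # RBF interpolant coefficients

# evaluate the surrogate densely and estimate E[qoi] w.r.t. U(-1, 1)
y_eval = np.linspace(-1.0, 1.0, 2001)
surrogate = gaussian_kernel(y_eval, y_colloc) @ coeffs
print("surrogate mean:", np.trapz(surrogate, y_eval) / 2.0)
print("reference mean:", np.trapz(qoi(y_eval), y_eval) / 2.0)
```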

    A representer theorem for deep kernel learning

    In this paper we provide a finite-sample and an infinite-sample representer theorem for the concatenation of (linear combinations of) kernel functions of reproducing kernel Hilbert spaces. These results serve as a mathematical foundation for the analysis of machine learning algorithms based on compositions of functions. As a direct consequence in the finite-sample case, the corresponding infinite-dimensional minimization problems can be recast as (nonlinear) finite-dimensional minimization problems, which can be tackled with nonlinear optimization algorithms. Moreover, we show how concatenated machine learning problems can be reformulated as neural networks and how our representer theorem applies to a broad class of state-of-the-art deep learning methods.
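
    To make the finite-dimensional reformulation concrete, here is a toy two-layer concatenated kernel model on n samples, parameterized by two finite coefficient arrays in the spirit of a representer-type expansion and trained with a generic nonlinear optimizer. The kernels, Frobenius-norm regularizers, and all parameters are my own illustrative choices rather than the paper's framework.

```python
# Toy sketch: a two-layer concatenated kernel model whose layers are finite
# linear combinations of kernel functions centered at the training data,
# fitted by generic nonlinear least-squares minimization.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d_hidden = 20, 2
X = rng.uniform(-1, 1, size=(n, 1))                       # training inputs
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(n)

def sqdist(A, B):
    return ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

def k_gauss(A, B, gamma):
    return np.exp(-gamma * sqdist(A, B))                  # Gaussian kernel matrix

def unpack(theta):
    alpha = theta[: n * d_hidden].reshape(n, d_hidden)    # inner-layer coefficients
    beta = theta[n * d_hidden:]                           # outer-layer coefficients
    return alpha, beta

def objective(theta, lam=1e-3):
    alpha, beta = unpack(theta)
    Z = k_gauss(X, X, 5.0) @ alpha                        # inner layer, representer form
    pred = k_gauss(Z, Z, 1.0) @ beta                      # outer layer, centers at f1(x_i)
    return ((pred - y) ** 2).mean() + lam * ((alpha ** 2).sum() + (beta ** 2).sum())

theta0 = 0.01 * rng.standard_normal(n * d_hidden + n)
res = minimize(objective, theta0, method="L-BFGS-B")
print("training objective:", objective(res.x))
```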

    CLSVOF as a fast and mass-conserving extension of the level-set method for the simulation of two-phase flow problems

    The modeling of two-phase flows in computational fluid dynamics is still an area of active research. One popular method is the coupling of the level-set and volume-of-fluid approaches (CLSVOF), which combines the advantages of both and yields improved mass conservation while retaining the straightforward computation of the curvature and the surface normal. Despite its popularity, details of the complex computational algorithms involved are hard to find, and where available they are mostly fragmented and inaccurate. In contrast, this article can be used as a comprehensive guide for implementing CLSVOF in existing level-set Navier–Stokes solvers on Cartesian grids in three dimensions.
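
    The article targets three-dimensional Cartesian-grid solvers; the following is only a drastically simplified one-dimensional cartoon of the coupling idea: advect the volume fraction conservatively, advect the level set, then rebuild the level set from the interface position implied by the conserved volume fraction (the actual method uses a more careful correction and reinitialization). All numbers are illustrative.

```python
# 1-D cartoon of the CLSVOF coupling idea, not the article's 3-D algorithm:
# a sharp interface at x_int separates fluid 1 (left) from fluid 2 (right),
# advected with constant velocity u. The VOF field carries the exact mass;
# the level set is re-anchored to the VOF-implied interface each step.
import numpy as np

N, dx, u, dt, steps = 100, 1.0 / 100, 0.3, 0.005, 200
xc = (np.arange(N) + 0.5) * dx                               # cell centers
x_int0 = 0.25                                                # initial interface position
c = np.clip((x_int0 - np.arange(N) * dx) / dx, 0.0, 1.0)     # VOF fractions
phi = x_int0 - xc                                            # signed-distance level set

for _ in range(steps):
    # conservative VOF update: geometric upwind flux of fluid 1 across right faces
    flux = np.clip(c * dx - (dx - u * dt), 0.0, u * dt)
    inflow = np.roll(flux, 1)
    inflow[0] = u * dt                                       # pure fluid 1 enters on the left
    c = c - flux / dx + inflow / dx
    # level-set advection (first-order upwind); the coupling step re-anchors it
    phi[1:] = phi[1:] - u * dt * (phi[1:] - phi[:-1]) / dx
    phi[0] += u * dt                                         # exact inflow value for the linear profile
    # CLSVOF coupling: rebuild phi as distance to the VOF-implied interface
    x_int = np.sum(c) * dx
    phi = x_int - xc

print("interface from VOF:", np.sum(c) * dx, " exact:", x_int0 + u * dt * steps)
```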

    Multilevel preconditioning based on discrete symmetrization for convection-diffusion equations

    The subject of this paper is an additive multilevel preconditioning approach for convection-diffusion problems. Our particular interest is in the convergence behavior for convection-dominated problems which are discretized by the streamline diffusion method. The multilevel preconditioner is based on a transformation of the discrete problem which reduces the relative size of the skew-symmetric part of the operator. For the constant-coefficient case, an analysis of the convergence properties of this multilevel preconditioner is given in terms of its dependence on the size of the convection term. Moreover, the results of computational experiments for more general convection-diffusion problems are presented, and our new preconditioner is compared to standard multilevel preconditioning.
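
    As a loose illustration of the overall idea (not the paper's streamline-diffusion discretization or its symmetrization transform), the sketch below preconditions a one-dimensional upwind convection-diffusion system with a generic additive two-level method built from the symmetric part of the operator and solves it with GMRES; all parameters are illustrative.

```python
# Toy additive two-level preconditioning for a 1-D convection-diffusion system:
# Jacobi on the fine level plus a coarse-level solve, both built from the
# symmetric part of the upwind operator, applied inside GMRES.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, splu

# upwind finite differences for -eps*u'' + b*u' = 1 on (0,1), Dirichlet BCs
n, eps, b = 127, 1e-2, 1.0
h = 1.0 / (n + 1)
D2 = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)) / h**2
D1 = sp.diags([-1, 1], [-1, 0], shape=(n, n)) / h            # upwind for b > 0
A = (eps * D2 + b * D1).tocsc()
f = np.ones(n)

# symmetric part of A, used to build the additive two-level preconditioner
M = ((A + A.T) / 2).tocsc()
Dinv = 1.0 / M.diagonal()                                    # Jacobi on the fine level
nc = (n - 1) // 2
R = sp.lil_matrix((nc, n))
for i in range(nc):                                          # full-weighting restriction
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
R = R.tocsc()
Ac_lu = splu((R @ M @ R.T).tocsc())                          # coarse-level solve

def apply_prec(r):
    return Dinv * r + R.T @ Ac_lu.solve(R @ r)               # additive combination

P = LinearOperator((n, n), matvec=apply_prec)
u_sol, info = gmres(A, f, M=P)
print("GMRES converged:", info == 0, " residual:", np.linalg.norm(f - A @ u_sol))
```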